School
of
Electronics and Communication Engineering
Senior Design Project Report
on
A Deep Learning framework for lane
detection algorithm on embedded ADAS
platform
By:
1. Aarya K.H USN: 01FE20BEC029
2. Lalita J.S USN: 01FE21BEC404
3. Vaidehi S.H USN: 01FE21BEC405
4. Sahana B.S USN: 01FE21BEC412
5. Abhishek V.S USN: 01FE21BEC416
Semester: VII, 2023-2024
Under the Guidance of
Prof. Gireesha H M
K.L.E SOCIETY’S
KLE Technological University,
HUBBALLI-580031
2023-24
SCHOOL OF ELECTRONICS AND COMMUNICATION
ENGINEERING
Certificate
This is to certify that the project entitled "A Deep Learning framework for lane detection algorithm on embedded ADAS platform" is a bonafide work carried out by the student team of Aarya K.H (01FE20BEC029), Lalita J.S (01FE21BEC404), Vaidehi S.H (01FE21BEC405), Sahana B.S (01FE21BEC412), and Abhishek V.S (01FE21BEC416).
The project report has been approved as it satisfies the requirements concerning the senior
design project work prescribed by the university curriculum for BE (VII Semester) in the
School of Electronics and Communication Engineering of KLE Technological University
for the academic year 2023-2024.
Prof. Gireesha H.M Dr. Suneeta V. Budihal Dr. Basavaraj S.A
Guide Head of School Registrar
External Viva:
Name of Examiners Signature with date
1.
2.
Acknowledgement
Every project’s success is largely attributable to the work of numerous individuals who have
consistently provided their insightful counsel or extended a helping hand. We really appreciate
the motivation, assistance, and direction provided by everyone who helped to make this effort
a success. We would like to use this opportunity to express our gratitude to Dr.Suneeta V.
Budihal, Head of the School of Electronics and Communications Engineering, for providing us
with a learning environment that encouraged the development of our practical abilities, which
helped our project succeed.We would also want to take this time to thank Prof. Gireesha H.M.
for his constant supervision, direction, and provision of the information we needed to complete
the project.We are grateful and pleased to have received ongoing support, encouragement, and
guidance from SoECE’s teaching and non-teaching faculty, which allowed us to successfully
complete our project.We are grateful that the School of Electronics and Communications at
KLE Technological University gave us the tools we needed to finish this project.
-The Project Team 36
ABSTRACT
Modern vehicles are now equipped with the Lane Departure Warning System (LDWS), a critical
safety feature that alerts drivers when their vehicle unintentionally drifts out of its lane. Implemented with sensors
and cameras, LDWS relies on complex interactions among the vehicle, sensors, and control al-
gorithms, making its testing challenging. The Hardware-in-the-Loop (HIL) simulation emerges
as a potent testing method, allowing developers to simulate vehicle sensor activities in a con-
trolled environment. Through HIL testing, the LDWS system, integrated with a lane detection
deep learning framework and Nvidia Jetson Xavier AGX, can be rigorously evaluated across
diverse conditions, including varied road geometries, climatic scenarios, and traffic patterns.
This approach provides a realistic and reliable platform for comprehensive testing in a virtual
environment, facilitated by HIL validation using dSPACE Scalexio.
Contents
1 Introduction 2
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Problem statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Organization of the Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 System Design 8
2.1 Functional Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Camera: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Architecture of Resnet-18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Embedded ADAS Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.5 Hardware Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3 Implementation Details 13
3.1 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Road and Scenario generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Algorithm for Lane detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.4 Implementation on ADAS Platform . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4 Results 16
4.1 Detection of Lane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2 Testing and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5 Conclusion and Future Scope 18
5.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.2 Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
List of Figures
1.1 The SAE levels of automation [3] . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Lane detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Functional block diagram of Lane detection . . . . . . . . . . . . . . . . . . . . . 8
2.2 Residual Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Architecture of ResNet-18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Jetson AGX Xavier kit[2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1 Urban road with different lane markings . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Fellows in Scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Top view of scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.1 Day Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2 Night Instance 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.3 Night Instance 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.4 Night Instance 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.5 Interfacing of ADAS ECU on dSPACE. . . . . . . . . . . . . . . . . . . . . . . . 17
4.6 Front view of scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.7 Birds eye view of scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Chapter 1
Introduction
Advanced Driver Assistance Systems (ADAS) refer to a collection of features intended to aid drivers and improve vehicle safety and comfort. Sensors such as cameras, RADAR, and LiDAR, together with other advanced technology, are used in ADAS to deliver features including lane departure warning, adaptive cruise control, autonomous emergency braking, blind spot recognition, and parking assistance. These features help increase driver awareness, decrease the likelihood of accidents, and improve the overall driving experience as well as the comfort and safety of the driver and passengers. ADAS is a significant step towards the development of self-driving cars and the future of transportation safety.
ADAS levels refer to a framework for categorising the capabilities and automation levels of automotive systems. The levels are defined by the Society of Automotive Engineers (SAE) and give a standardised means of describing a vehicle's level of automation. The SAE, founded in 1905, is a global automotive and aerospace engineering organization. SAE's century-long commitment to developing industry standards and encouraging worldwide collaboration is reflected in its presence in 90 countries. SAE is well known for its influence on automotive performance, safety protocols, emissions laws, and materials engineering. Beyond standards, SAE actively disseminates information through publications, educational programs, and events that foster collaboration and innovation. SAE's impact is amplified by technical committees and journals, which contribute to continuing discussions in vehicle dynamics, materials, propulsion, and safety engineering. SAE's standards, which are integrated into manufacturing processes, promote excellence, safety, and regulatory compliance, confirming the organization's role as a driving force in automotive and aerospace growth. The SAE has classified automation into six levels, as shown in Figure 1.1.
Figure 1.1: The SAE levels of automation [3]
These levels provide a common framework for understanding the increasing degrees of automation found in cars, from basic driver assistance features to completely autonomous driving capabilities. This framework facilitates open dialogue about the potential and constraints of automated car systems, and it supports the advancement and implementation of ADAS technology.
The automotive industry uses the ASIL (Automotive Safety Integrity Level) classification system to inspect and keep the electrical and electronic systems of vehicles safe. ASIL levels are defined by the international standard ISO 26262 for functional safety in road vehicles. ASIL classifications categorise the possible risk related to safety-related system flaws or malfunctions. There are four ASIL levels:
ASIL A, ASIL B, ASIL C, and ASIL D.
Lane departure warning systems, which are increasingly common in modern automobiles, improve driver safety by detecting lane departures and giving visual and auditory alarms. Building such systems necessitates the use of modern tools such as computer vision and image processing. This report uses a deep learning framework to create a lane detection algorithm for a lane departure warning system. The framework's functioning was verified using Hardware-in-the-Loop (HIL) simulation via dSPACE Scalexio after the lane detection deep learning framework was integrated with the Nvidia Jetson Xavier AGX. This method provides a realistic and dependable platform for evaluating system performance in a simulated environment.
Depending on the implementation, the LDW system is often classified as ASIL B or ASIL C
in terms of safety. It is typically seen as a driver assistance feature in Level 1 and Level 2
autonomous driving systems to increase lane-keeping capabilities and overall road safety.
Figure 1.2: Lane detection
dSPACE Scalexio is a modular real-time system for Hardware-in-the-Loop (HIL) testing and validation. It serves as a platform for simulating and testing the complex control systems used in a variety of applications such as automotive, aerospace, and industrial systems. It consists of a processing unit, a variety of I/O modules, and software for system configuration and control. The I/O modules are intended to interface with the sensors and actuators found in control systems, providing a realistic simulation environment for testing and validation. It supports a variety of communication protocols, including CAN, LIN, and FlexRay, making it easy to integrate with different control systems. In addition to HIL testing, it can be used for rapid control prototyping, providing a platform for developing and testing control algorithms in a real-time environment.
Overall, its modular design and flexibility make it an effective tool for a wide range of testing and validation applications.
dSPACE offers a range of software solutions tailored to specific testing and validation needs.
Among these, some prominent dSPACE desks include:
1. ModelDesk: Designed for model development and testing, particularly in automotive and aerospace systems, ModelDesk serves as a platform for creating and testing control models. It supports a variety of modeling and simulation tools and can be seamlessly integrated with other dSPACE tools, making it well suited for Hardware-in-the-Loop (HIL) testing and validation.
2. MotionDesk: This software is dedicated to testing and validating motion control systems used in various applications, such as robotics, aerospace, and automotive systems. MotionDesk provides an environment for real-time testing and simulation of motion control systems and is compatible with a range of I/O modules.
3. ControlDesk: A robust and versatile piece of software, ControlDesk offers an intuitive user interface for configuring, managing, and operating HIL testing equipment. It supports multiple data formats and can be customized to meet specific testing requirements.
4. AutomationDesk: Tailored for automated control system testing and validation, AutomationDesk provides a platform for developing and executing automated test sequences. This includes Software-in-the-Loop (SIL) testing and the use of virtual ECUs. AutomationDesk can be interconnected with other dSPACE tools for HIL testing and supports various test automation frameworks.
The term "ASM" (Automotive Simulation Models) [1] in the automotive industry refers to computer-based models and simulations used for vehicle development, testing, and validation. ASM encompasses a diverse set of simulation models representing different vehicle components, such as the powertrain, chassis, vehicle dynamics, control systems, and driver behavior. These models, formulated using mathematical equations, algorithms, and empirical data, allow engineers to virtually analyze and replicate real-world driving scenarios.
Through the virtual tests and assessments conducted with ASM, automotive experts can evaluate aspects like vehicle performance, fuel efficiency, emissions, safety features, and overall functionality. This approach minimizes the need for physical prototypes and expensive field testing.
ASM models are often integrated into specialized software platforms or simulation environ-
ments, providing engineers with a comprehensive toolbox for parameterization, customization,
and analysis. This aids in optimizing vehicle design, performance, and efficiency.
The use of ASM in the automobile industry offers several advantages, including early issue
detection, optimization of vehicle systems, and the assessment of new technologies before imple-
mentation. Additionally, it contributes to cost and time savings during the development process
by facilitating quick and iterative simulations, allowing for efficient design changes.
The NVIDIA Jetson Xavier AGX is a powerful embedded computer module designed specifically
for AI and machine learning applications at the edge. It provides significant computational
capability for deep learning workloads thanks to a Volta GPU with Tensor Cores and an eight-
core ARM64 CPU. The module, which focuses on edge computing, enables real-time decision-
making in applications such as autonomous vehicles and robots. Its adaptability is demonstrated
by its support for a wide range of AI applications, and NVIDIA’s complete software development
kit simplifies the development process. The Jetson Xavier AGX is used in a variety of industries,
from healthcare to smart cities, demonstrating its ability to tackle demanding AI workloads at the edge.
1.1 Motivation
As per a World Health Organization (WHO) report [4], road traffic accidents are a major global cause of fatalities, resulting in an estimated 1.35 million deaths and 50 million injuries annually. Lane departure emerges as a prevalent factor in highway accidents. The Ministry of Road Transport and Highways in India reported a total of 4,61,312 road accidents across States and Union Territories (UTs) during the calendar year 2022, resulting in the loss of 1,68,491 lives and causing injuries to 4,43,366 individuals. The number of road accidents in 2022 exhibited an 11.9 percent increase compared to the previous year. Similarly, deaths and injuries due to road accidents also rose by 9.4 percent and 15.3 percent, respectively. These figures average 1,264 accidents and 462 deaths every day, or 53 accidents and 19 deaths every hour, across the country.
A study by Monash University Accident Research Centre reveals that lane departure acci-
dents contributed to 16% of all road fatalities and serious injuries in Australia from 2015 to 2019.
The study underscores the potential of Lane Departure Warning (LDW) systems in reducing
the risk of such accidents by up to 21%. In Europe, lane departure accidents represent about
12% of all road fatalities, according to the European Commission. Meanwhile, in the United
States, the National Highway Traffic Safety Administration (NHTSA) notes that lane departure
accidents claimed over 9,400 lives and injured 700,000 people in 2019.
This project’s motivation lies in developing a reliable and efficient Lane Departure Warning
(LDW) system integrated with a lane detection deep learning framework. The aim is to enhance
driving safety by alerting drivers to unintentional lane departures. Leveraging HIL testing with
dSPACE Scalexio, the project employs a real-time simulation environment to validate the LDW
system’s functionality and performance. The inclusion of the NVIDIA Jetson Xavier AGX
further ensures the system’s effectiveness, marking a significant advancement in mitigating the
impact of lane departure accidents on road safety.
1.2 Objectives
The objectives of this project are:
1. To develop a deep learning framework for lane detection and assess its efficacy in enhancing driving safety while reducing the risk of accidents on urban roads with different lane markings.
2. To create a road and scenario using dSPACE ModelDesk and MotionDesk.
3. To implement the algorithm on an embedded ADAS platform.
4. To validate the functionality and performance of the algorithm using the dSPACE HIL system.
1.3 Literature Survey
The low precision and poor real-time performance of conventional lane detection systems for autonomous vehicles is the problem this study attempts to solve. The proposed approach is a real-time deep lane detection system that integrates a CNN Encoder-Decoder and Long Short-Term Memory (LSTM) networks. The CNN encoder extracts deep features and reduces dimensionality, while the LSTM network analyses previous data to improve detection accuracy by reducing the impact of false alarms. The authors investigated and assessed three network designs using a dataset of 12,764 road photos covering different scenarios. With an average accuracy of 96.36%, recall of 97.54%, and F1-score of 97.42%, the hybrid CNN Encoder-LSTM-Decoder network, which was developed on an NVIDIA Jetson Xavier NX and incorporated into a Lane Departure Warning System (LDWS), displayed strong prediction performance [5].
This paper presents SwiftLane, an end-to-end deep learning system intended for efficient, real-time lane detection in complex situations. Using curve fitting, false positive suppression, and row-wise classification, SwiftLane outperforms previous techniques in terms of speed and accuracy, achieving 411 frames per second of inference on the CULane benchmark dataset. Optimised with TensorRT, the framework allows real-time lane detection on an Nvidia Jetson AGX Xavier embedded device at an inference speed of 56 frames per second [6].
This study presents a robust approach based on precise geometric estimation in highway settings to handle lane recognition difficulties for car safety applications. Unlike conventional visual feature-based techniques, this algorithm is less sensitive to changes in weather, light, and distance. It uses an innovative method that generates and verifies neighbouring-lane hypotheses to ensure successful identification even in the face of changing environmental conditions. The 'cross ratio' and 'dynamic homography matrix estimation' are employed by the algorithm to generate neighbouring-lane hypotheses accurately; no additional vehicle sensors or calibration are required. Exhibiting robustness against changes in illumination in simulations and on 752 × 480 video sequences, the algorithm operates over six lanes, comprising the driving lane and two neighbouring lanes [7].
This paper presents a performance comparison of embedded systems for real-time video sequence processing, focusing on lane detection programmes for autonomous vehicles in airport areas. The tested modules include the NVIDIA Jetson Nano, Raspberry Pi 4B, and NVIDIA Jetson Xavier AGX. The study specifically looks at the NVIDIA Jetson modules' maximum video stream processing performance across a range of resolutions and power modes. The findings show that NVIDIA Jetson modules have a large amount of processing power and can track lanes well even when using less power. This study emphasises how well suited NVIDIA Jetson platforms are for tasks involving real-time video processing in autonomous vehicle applications [8].
This research presents a novel lane detection method that breaks from pixel-wise segmentation techniques, which are ill-suited to difficult conditions and speed constraints. The proposed formulation, which greatly reduces computational cost, treats lane detection as a row-based selection problem using global features, drawing inspiration from human perception. To address difficult situations, the approach makes use of a broad receptive field for global information and adds a structural loss to explicitly characterise lane structures. The method achieves state-of-the-art performance in terms of speed and accuracy, as shown by extensive trials on benchmark datasets. A lightweight version of the method runs at around 300 frames per second, which is four times faster than prior state-of-the-art methods. The code is publicly available [9].
In order to facilitate lane-keeping and provide lane departure warnings, this research presents a
unique spline-based lane recognition and tracking system. The method improves system stability
and robustness by modelling lane markers accurately and flexibly using an extended Kalman
filter and Catmull-Rom splines. Unlike conventional techniques, it tracks and models each
lane marking separately, without assuming lane parallelism or certain forms. In lengthy road
tests, the system successfully navigates difficult conditions including worn-out lane markers,
construction sites, and tight curves, demonstrating real-time performance on a basic PC with
WVGA resolution[10].
1.4 Problem statement
Develop a robust Deep Learning framework for Lane Detection, allowing for extensive testing and evaluation on an embedded ADAS platform.
1.5 Organization of the Report
The organization of the report is as follows:
Chapter 2 presents the proposed system design, the functional block diagram, the architecture of ResNet-18, and the hardware specifications. Chapter 3 covers the implementation details: the dataset used, road and scenario generation, the algorithm of the proposed model, and the dSPACE scenario. Chapter 4 discusses the results, and Chapter 5 presents the conclusion and future scope.
Chapter 2
System Design
The process of defining a system’s modules, interfaces, components, and data in order to meet
predetermined requirements is known as system design. It can be viewed as the product devel-
opment application of systems theory. Conceptual, logical, and physical design are all aspects
of system design.
2.1 Functional Block Diagram
The functional block diagram is a graphical language for describing the function between input
and output variables in the programmable logic controller architecture. The lane detection
system is shown in Figure 2.1. Camera is used to capture the image/video of the lane, and each
Resnet-18 layers produces a set of feature maps as its output. Every feature map is associated
with a certain learnt feature or pattern seen in the input picture. It consists of majorly three
layers Pooling layer, SoftMax layer, Output layer for the features mapping. Here defining anchor
boxes involves specifying their sizes and aspect ratios based on the characteristics of the objects
you expect to detect in your dataset.
Figure 2.1: Functional block diagram of Lane detection
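As a minimal sketch of the anchor-box idea described above, each anchor can be generated as a (width, height) pair whose area matches a base size and whose width/height ratio matches an aspect ratio. The base sizes and ratios below are illustrative placeholders, not the values used in this project:

```python
def make_anchors(base_sizes=(32, 64), aspect_ratios=(0.5, 1.0, 2.0)):
    """Return (width, height) pairs with area == size**2 and width/height == ratio."""
    anchors = []
    for s in base_sizes:
        for r in aspect_ratios:
            w = s * r ** 0.5   # width grows with sqrt(ratio)
            h = s / r ** 0.5   # height shrinks with sqrt(ratio)
            anchors.append((w, h))
    return anchors
```

With the defaults this yields six anchors; choosing the sizes and ratios to match the shapes of lane markings in the dataset is the tuning step the paragraph above refers to.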
2.2 Camera:
A camera is a sensor used for gathering visual data from the environment or the road. The camera is the input device; the photos or video frames it captures are processed in order to identify and evaluate the lanes on the road.
2.3 Architecture of Resnet-18
Residual learning addresses the vanishing gradient problem by using skip connections that bypass one or more layers. The idea is to learn a residual function, the difference between the input and output of a set of layers, rather than learning the entire mapping directly. Mathematically, if x is the input to a set of layers and H(x) is the desired mapping, the layers are trained to produce the residual F(x) = H(x) - x, and the original mapping is recovered as H(x) = F(x) + x.
Figure 2.2: Residual Network
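The identity-skip idea can be sketched in a few lines. This is a toy illustration in which a single fully connected layer stands in for the block's convolutions, not the project's actual implementation; the point is that when the learned weights are zero the block reduces to the identity mapping, which is what makes very deep networks trainable:

```python
import numpy as np

def block_layers(x, w):
    """Stand-in for the layers inside a residual block: F(x) = ReLU(x @ w)."""
    return np.maximum(0.0, x @ w)

def residual_block(x, w):
    """Skip connection: the block outputs H(x) = F(x) + x."""
    return block_layers(x, w) + x
```

With `w` initialised to zeros, `residual_block(x, w)` returns `x` unchanged, so adding more blocks can never make the mapping worse than the identity.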
The ResNet-18 architecture consists of a series of layers, and the main layers are:
1. Input layer
2. Initial convolution layer
3. Global Average Pooling
4. Fully Connected Layer
5. Softmax Activation
Input layer:
The ResNet-18 input layer expects an image in the form of a 3D tensor. The input size is set to 224×224×3, where 224 and 224 are the spatial dimensions of the input image (width and height, respectively) and 3 is the number of color channels (red, green, and blue), as ResNet-18 is designed for RGB images.
Initial Convolution Layer:
ResNet begins with one convolution and pooling step, followed by four stages of similar behavior. It consists of a series of layers and uses skip connections.
Global Average Pooling:
Global Average Pooling is a pooling operation designed to replace the fully connected layers of a classical CNN by averaging each feature map down to a single value.
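As a small sketch of the operation (the channels-first `(C, H, W)` layout is an illustrative convention, not something mandated by the report):

```python
import numpy as np

def global_avg_pool(fmap):
    """Collapse a (C, H, W) feature map to a (C,) vector by averaging over H and W."""
    return fmap.mean(axis=(1, 2))
```

Because the output length depends only on the channel count, the network can accept any spatial resolution at this stage, which is one reason GAP replaced large fully connected layers.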
Softmax Activation:
In ResNet-18, the softmax activation is typically used in the final layer for classification tasks. This activation function converts the raw output scores of the network into probabilities: it exponentiates each score and normalizes the results to obtain a probability distribution over the classes, making it suitable for multi-class classification problems.
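A numerically stable version of the operation described above can be sketched as follows; subtracting the maximum score before exponentiating is a standard trick to avoid overflow, not something specific to this project:

```python
import numpy as np

def softmax(scores):
    """Map raw class scores to a probability distribution that sums to 1."""
    shifted = scores - scores.max()   # stability: avoids exp overflow on large scores
    exps = np.exp(shifted)
    return exps / exps.sum()
```

The shift does not change the result, since the common factor cancels in the normalization.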
Figure 2.3: Architecture of ResNet-18
Residual Blocks: ResNet-18 is composed of residual blocks, each consisting of several convolutional layers. These residual blocks let the network learn residual mappings through skip connections, preventing the vanishing gradient problem and allowing the training of deeper designs. ResNet-18's layer stack includes 18 layers, comprising convolutional, pooling, and fully connected layers, and the architecture is organized into multiple stages, each containing several residual blocks.
Image Preprocessing: The input image or video frame is typically resized to a fixed size, commonly a square resolution like 224x224 pixels. This resizing step ensures consistency in input dimensions across the network. Additionally, the image is normalized by subtracting the mean pixel values and dividing by the standard deviation. Normalization helps to reduce the impact of varying lighting conditions and enhances the model's performance by bringing the pixel values to a standardized range.
Video Frame Preprocessing: When processing video frames, ResNet-18 usually operates on individual frames rather than directly utilizing temporal information. However, techniques like temporal filtering or optical flow can be applied to improve temporal consistency and enhance the performance of the lane detection system. Temporal filtering methods can eliminate noise or jittery detections by considering information from adjacent frames, while optical flow techniques can estimate the motion between frames, providing additional temporal information for better lane detection.
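The resize-and-normalize step described above can be sketched as follows. The pure-NumPy nearest-neighbour resize and the widely used ImageNet channel statistics are assumptions for illustration; the report does not specify the project's pipeline at this level of detail:

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def resize_nn(img, size=224):
    """Nearest-neighbour resize of an (H, W, 3) image to (size, size, 3)."""
    h, w, _ = img.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]

def preprocess(img):
    """uint8 (H, W, 3) frame -> normalized (224, 224, 3) float array."""
    x = resize_nn(img).astype(np.float64) / 255.0   # scale pixels to [0, 1]
    return (x - IMAGENET_MEAN) / IMAGENET_STD       # per-channel standardization
```

In practice a library resampler (bilinear or better) would replace `resize_nn`; the structure of the step is the point here.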
Batch Processing: ResNet-18 typically processes images or frames in batches. Instead of
passing one image or frame at a time through the network, a batch of images or frames is
processed simultaneously. Batch processing improves computational efficiency during inference
by taking advantage of parallel processing capabilities. This allows for faster inference times
and efficient resource utilization, making real-time lane detection feasible. The ResNet-18 ar-
chitecture, with its residual blocks and layer stack, provides the network with the capacity to
learn complex features and patterns for accurate lane detection. The image and video frame
preprocessing steps ensure consistent input representation and enhance the model’s robustness
to varying lighting conditions. By incorporating techniques like temporal filtering and utilizing
batch processing, ResNet-18 can effectively handle video input and deliver real-time lane detec-
tion capabilities with improved performance and computational efficiency.
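The batching described above amounts to stacking preprocessed frames along a new leading dimension before a single forward pass. The (N, 3, 224, 224) channels-first layout shown here is the common PyTorch convention and is assumed for illustration:

```python
import numpy as np

def make_batch(frames):
    """Stack preprocessed (224, 224, 3) frames into an (N, 3, 224, 224) batch."""
    batch = np.stack(frames, axis=0)    # (N, 224, 224, 3)
    return batch.transpose(0, 3, 1, 2)  # move channels first for the network
```

One call to the network on this batch then replaces N separate calls, which is where the parallelism gain comes from.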
2.4 Embedded ADAS Platform
The Jetson AGX Xavier is an advanced development kit that serves as an exceptional platform for deploying computer vision and deep learning algorithms. It offers impressive specifications, enabling developers to harness the power of AI and accelerate their applications. The key specifications of the Jetson AGX Xavier are as follows. The kit features an octa-core NVIDIA Carmel ARMv8.2 CPU clocked at 2.26 GHz. This high-performance CPU provides ample processing power to handle complex computations and AI tasks efficiently. With its 512-core Volta GPU, the Jetson AGX Xavier delivers exceptional graphics processing capabilities. The GPU is equipped with 64 Tensor Cores, which accelerate deep learning computations and enable efficient execution of neural network inference. To further enhance deep learning performance, the development kit incorporates dual deep learning accelerators.
These accelerators enable faster execution of AI algorithms and contribute to the overall efficiency of the system. The Jetson AGX Xavier provides 32GB of eMMC memory for data storage, ensuring sufficient capacity for storing models, datasets, and intermediate results. It is also equipped with 16GB of LPDDR4 RAM, which facilitates smooth and efficient multitasking during AI computations. With 1GB of graphical memory, the kit can handle the demands of graphics-intensive applications, such as computer vision and image processing tasks. The development kit includes 40 GPIO pins, which offer versatile connectivity options for interfacing with external devices and peripherals. This allows developers to integrate their projects with various sensors, actuators, and other components. To support software development, the NVIDIA JetPack SDK is used to boot up the Jetson AGX Xavier development kit. This comprehensive SDK provides developers with a wide range of tools, libraries, and frameworks for AI application development, including CUDA, cuDNN, TensorRT, and support for popular AI frameworks.
Figure 2.4: Jetson AGX Xavier kit[2]
2.5 Hardware Specifications
Specification of Nvidia Jetson AGX Xavier:
GPU: 512-core Volta GPU with 64 Tensor Cores
Memory: 32GB 256-bit LPDDR4x with a 137 GB/s memory bandwidth
Storage: 32GB eMMC 5.1
Operating System Support: Ubuntu-based Linux (NVIDIA Linux for Tegra)
Chapter 3
Implementation Details
3.1 Dataset
The dataset is a collection of images used for lane detection experiments. These images
were taken from the TuSimple dataset, which is used for experimentation with the lane detection
algorithm.
TuSimple Dataset: The TuSimple Lane Detection dataset was released by TuSimple,
a firm that specialises in autonomous driving technologies. Sample images of roads with
dashed and continuous lane markings are shown in Figure 3.1. The purpose of this dataset is to
assist with lane detection in actual driving situations.
Annotations of lane boundaries: 14,336
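TuSimple annotations are stored as JSON lines, one per frame: a list of fixed row positions (`h_samples`) and, for each lane, the x-coordinate at each of those rows, with -2 marking rows where the lane is absent. A minimal stdlib-only sketch of parsing one such annotation (the coordinate values below are made up for illustration):

```python
import json

# One TuSimple-style label line (values are illustrative, not from the dataset):
# each lane lists x-coordinates sampled at the fixed rows in "h_samples";
# x = -2 means the lane is not present at that row.
label_line = json.dumps({
    "lanes": [[-2, -2, 632, 625, 617], [719, 734, 748, 762, 777]],
    "h_samples": [240, 250, 260, 270, 280],
    "raw_file": "clips/0601/1494452577507766671/20.jpg",
})

def lane_points(label_json):
    """Convert one TuSimple annotation into per-lane (x, y) point lists,
    dropping the rows where the lane is absent (x == -2)."""
    record = json.loads(label_json)
    lanes = []
    for xs in record["lanes"]:
        pts = [(x, y) for x, y in zip(xs, record["h_samples"]) if x >= 0]
        lanes.append(pts)
    return lanes

points = lane_points(label_line)
print(len(points))    # number of annotated lanes in this frame -> 2
print(points[0][0])   # first visible point of the first lane -> (632, 260)
```

The fixed `h_samples` grid is what makes per-row (anchor-based) lane formulations straightforward to train on this dataset.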
Figure 3.1: Urban road with different lane markings
3.2 Road and Scenario generation
dSPACE MotionDesk is a real-time simulation data visualisation and analysis software tool.
It is intended to be used in conjunction with dSPACE's hardware-in-the-loop (HIL) systems,
allowing users to create interactive and dynamic visualisations of complex simulation models.
MotionDesk is suitable for a variety of applications, including automotive, aerospace, and industrial control systems.
The software has an easy-to-use interface that allows users to customise the visualisations by
adding 3D animations, graphs, and charts to represent simulation data in an intuitive and
interactive way. We used MotionDesk to create a road network modelled on a real road in
Hubli.
The road network shown in Figure 3.3 is a 1:10 replica of the real road from KLE
Technological University to Hosur circle. Traffic signals and signs were added to make the scenario as
realistic as possible, along with different vehicles, pedestrians, and buildings, each assigned
different routes.
In Figure 3.2 the blocks represent the vehicles (fellows) and pedestrians present in our
virtual environment. The properties and attributes of these fellows can be modified to fit
our needs.
Figure 3.2: Fellows in scenario.
Figure 3.3 shows the environment created in dSPACE ModelDesk, visualised in MotionDesk; MotionDesk is mainly used to visualise the environment created in ModelDesk.
Figure 3.3: Top view of scenario.
3.3 Algorithm for Lane detection
The algorithm for lane detection using the ResNet-18 architecture, implemented on the Nvidia Jetson
AGX Xavier and validated on the dSPACE HIL platform, is shown below.
Algorithm 1 Lane Detection Algorithm using ResNet-18 and Nvidia Jetson AGX Xavier
1: Start with input image.
2: Apply ResNet-18 layers to produce a set of feature maps associated with learned
features.
3: Perform convolution and define anchor boxes.
4: Apply pooling layer, softmax layer, and output layer for feature mapping.
5: Utilize Nvidia Jetson AGX Xavier for implementation.
6: Detect multiple lanes using the implemented algorithm.
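As a rough illustration of steps 3 and 4, a row-anchor formulation (in the spirit of Ultra Fast Lane Detection [9]) classifies, for each anchor row, which horizontal grid cell contains the lane, with one extra class for "no lane in this row". The stdlib-only sketch below decodes such per-row logits into lane x-positions; the grid size, background-class convention, and logit values are assumptions for illustration, not the report's exact head:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def decode_row_anchors(logits_per_row, img_width, num_cells):
    """Decode per-row classification logits into lane x-positions.

    logits_per_row: for each anchor row, num_cells + 1 logits, where the
    last entry is the 'no lane in this row' (background) class.
    Returns one x-coordinate per row, or None where no lane is present.
    """
    cell_w = img_width / num_cells
    xs = []
    for logits in logits_per_row:
        probs = softmax(logits)
        if probs[-1] == max(probs):      # background class wins: no lane here
            xs.append(None)
            continue
        cell_probs = probs[:-1]
        norm = sum(cell_probs)
        # expected x-coordinate: probability-weighted average of cell centres
        x = sum(p * (i + 0.5) * cell_w for i, p in enumerate(cell_probs)) / norm
        xs.append(x)
    return xs

# Toy example: 4 grid cells over a 400 px wide image, two anchor rows.
rows = [
    [0.0, 4.0, 0.0, 0.0, -2.0],   # confident in cell 1 -> x near 150 px
    [0.0, 0.0, 0.0, 0.0, 6.0],    # background dominates -> no lane
]
print(decode_row_anchors(rows, img_width=400, num_cells=4))
```

Treating each row as a classification problem (rather than segmenting every pixel) is what makes this family of methods fast enough for embedded inference.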
3.4 Implementation on ADAS Platform
The deep learning framework is implemented and tested on the Nvidia Jetson AGX Xavier board,
which is used in many applications such as autonomous vehicles. The algorithm is tested on
this platform by executing a Python script together with the required Python distributions and packages.
The ADAS platform uses a dedicated operating system: NVIDIA's Linux for Tegra, built on Linux
kernel v5.15, with a root file system based on Ubuntu v22.04, a UEFI bootloader, and OP-TEE
as the Trusted Execution Environment.
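As a sketch of the kind of per-frame preprocessing such a Python script performs before inference with a ResNet-style backbone (the target resolution and ImageNet normalisation statistics are assumptions; the report does not state its exact preprocessing):

```python
# ImageNet channel statistics commonly used with ResNet backbones
# (an assumption here, not taken from the report).
MEAN = (0.485, 0.456, 0.406)
STD = (0.229, 0.224, 0.225)

def preprocess(frame, out_h, out_w):
    """Nearest-neighbour resize plus per-channel normalisation of an RGB
    frame given as a nested list frame[y][x] = (r, g, b), values in 0..255.
    Pure-stdlib stand-in for what cv2/NumPy would do on the real device."""
    in_h, in_w = len(frame), len(frame[0])
    out = []
    for y in range(out_h):
        src_y = min(int(y * in_h / out_h), in_h - 1)
        row = []
        for x in range(out_w):
            src_x = min(int(x * in_w / out_w), in_w - 1)
            rgb = frame[src_y][src_x]
            # scale to 0..1, then subtract mean and divide by std per channel
            row.append(tuple((c / 255.0 - m) / s
                             for c, m, s in zip(rgb, MEAN, STD)))
        out.append(row)
    return out

tiny = [[(255, 0, 0)] * 4 for _ in range(4)]   # 4x4 pure-red test frame
img = preprocess(tiny, out_h=2, out_w=2)
print(len(img), len(img[0]))                   # -> 2 2
```

On the actual board this step would run with NumPy/OpenCV (or on the GPU via the JetPack libraries); the sketch only shows the arithmetic the network input expects.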
Chapter 4
Results
This chapter contains the results from our methodology, a discussion of the results, our road
design and scenario, and a simulated scenario. The lane mark detection framework appears to be
performing as intended. This algorithm necessitates the gradual adjustment of multiple
parameters.
4.1 Detection of Lane
The deep learning framework methodology is used for lane detection in urban scenarios, where we
considered both day and night conditions for experimentation with the dataset on local roads.
Day Instance: In daylight, the algorithm accurately detects and marks the lanes in the image
as shown in Figure 4.1.
Night Instance 1: At night, the algorithm accurately detects and marks the lanes in the image
as shown in Figure 4.2.
Night Instance 2: At night, the algorithm is unable to detect all the lanes, as some lane markings
are not visible, as shown in Figure 4.3.
Night Instance 3: In low-light conditions, the algorithm detects only a single lane due to insufficient
lighting, as shown in Figure 4.4.
Figure 4.1: Day Instance Figure 4.2: Night Instance 1
Figure 4.3: Night Instance 2 Figure 4.4: Night Instance 3
4.2 Testing and Validation
The lane detection algorithm is deployed on the ADAS ECU, which is interfaced with the dSPACE
HIL system as shown in Figure 4.5. The interface uses CAN communication to test and
validate the algorithm; testing on the HIL platform verifies that lanes are detected accurately
across different scenarios.
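The report does not give the CAN message layout used between the ECU and the HIL system. As an illustrative sketch only (the arbitration ID, scaling, and signal layout are all assumptions), detected lane offsets could be packed into a single 8-byte CAN data frame with the standard library:

```python
import struct

# Hypothetical CAN layout: one 8-byte frame carrying up to four lateral lane
# offsets, each a signed 16-bit value in centimetres relative to the vehicle
# centreline. Not the report's actual DBC definition.
LANE_FRAME_ID = 0x200  # illustrative arbitration ID

def pack_lane_frame(offsets_m):
    """Encode up to four lateral lane offsets (metres) into 8 data bytes."""
    vals = [int(round(o * 100)) for o in offsets_m]   # metres -> centimetres
    vals += [0] * (4 - len(vals))                     # pad unused slots
    return struct.pack("<4h", *vals)                  # little-endian int16 x4

data = pack_lane_frame([-1.75, 1.80])   # left/right lane boundary offsets
print(data.hex())                       # 8 bytes ready for a CAN 2.0 frame
```

On the actual platform a library such as python-can (or the dSPACE tooling) would place these bytes on the bus; the sketch only shows how the payload itself can be encoded deterministically.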
Figure 4.5: Interfacing of ADAS ECU on dSPACE.
Figures 4.6 and 4.7 show the environment created in dSPACE ModelDesk, visualised in
dSPACE MotionDesk.
Figure 4.6: Front view of scenario.
Figure 4.7: Birds eye view of scenario.
Chapter 5
Conclusion and Future Scope
5.1 Conclusion
In conclusion, the lane line identification algorithm created in this study has shown good results
in tracking and detecting lane lines during real-world driving scenarios. Key phases in the
algorithm pipeline include using ResNet-18 layers to generate feature maps, establishing anchor
boxes through convolution, using pooling layers, softmax layers, and an output layer for feature
mapping, and incorporating deep learning. Further optimization and modifications in these
steps can improve the resilience and accuracy of the strategy. However, it is critical to recognize
the disadvantages of depending entirely on a single algorithm for lane line recognition. In
practice, achieving robustness necessitates the use of several backup methods, particularly in the
case of self-driving automobiles. In the event of a failure in the primary algorithm, these fallback
algorithms should be well-prepared to maintain system robustness.
5.2 Future Scope
In the future, the integration of a lane detection deep learning framework, specifically exploiting
the capabilities of the Nvidia Jetson Xavier AGX, has the potential to greatly improve the algo-
rithm’s performance. The versatility and computational power of the Nvidia Jetson Xavier AGX
platform can help with real-time processing, enhancing overall responsiveness and reliability in
lane line detection. Future study may potentially investigate the incorporation of additional
sensor data and advanced machine learning techniques to improve the algorithm’s adaptability
and resilience in a variety of driving scenarios.
Bibliography
[1] dSPACE ASM. https://www.dspace.com/en/inc/home/products/sw/automotive_
simulation_models.cfm#176_26311.
[2] Nvidia AGX Xavier Kit. https://www.nvidia.com/en-in/autonomous-machines/
embedded-systems/jetson-agx-xavier/.
[3] SAE levels. https://www.faistgroup.com/news/autonomous-vehicles-levels/.
[4] WHO report on Road traffic injuries. https://www.who.int/health-topics/
road-safety#tab=tab_1.
[5] Deep embedded hybrid cnn–lstm network for lane detection on nvidia jetson xavier nx.
Knowledge-Based Systems, 240:107941, 2022.
[6] Oshada Jayasinghe, Damith Anhettigama, Sahan Hemachandra, Shenali Kariyawasam,
Ranga Rodrigo, and Peshala Jayasekara. Swiftlane: Towards fast and efficient lane detec-
tion. In 2021 20th IEEE International Conference on Machine Learning and Applications
(ICMLA), pages 859–864, 2021.
[7] Seung-Nam Kang, Soomok Lee, Junhwa Hur, and Seung-Woo Seo. Multi-lane detection
based on accurate geometric lane estimation in highway scenarios. In 2014 IEEE Intelligent
Vehicles Symposium Proceedings, pages 221–226, 2014.
[8] Kacper Podbucki, Jakub Suder, Tomasz Marciniak, and Adam Dabrowski. Evaluation of
embedded devices for real-time video lane detection. In 2022 29th International Conference
on Mixed Design of Integrated Circuits and System (MIXDES), pages 187–191, 2022.
[9] Zequn Qin, Huanyu Wang, and Xi Li. Ultra fast structure-aware deep lane detection, 2020.
[10] Kun Zhao, Mirko Meuter, Christian Nunn, Dennis Müller, Stefan Müller-Schneiders, and
Josef Pauli. A novel multi-lane detection and tracking system. In 2012 IEEE Intelligent
Vehicles Symposium, pages 1084–1089, 2012.